64 research outputs found

    CAESAR8: an agile enterprise architecture approach to managing information security risks in business change projects

    Implementing an Enterprise Architecture (EA) should enable organizations to increase the accuracy of information security risk assessments. Studies show that EAs provide a holistic perspective that improves information security risk management (ISRM). However, many organizations have been unable or unwilling to fully implement EA frameworks. The requirements for implementing an EA can be unclear, the full benefits of many commercial frameworks are uncertain, and the overheads of creating and maintaining EA artifacts are often considered unacceptable, especially for organizations following agile business change programs or having limited resources. Following the Design Science Research methodology, this thesis describes a comprehensive, multidisciplinary approach to designing a new model that can be used for dynamic and holistic reviews of information security risks in business change projects. The model incorporates five novel design principles that are independent of any existing EA framework, security standard or maturity model. This new model is called CAESAR8: Continuous Agile Enterprise Security Architecture Review in 8 domains. CAESAR8 incorporates key ISRM success factors determined from root cause analysis of information security failures. Combining systems thinking with agile values and lean concepts in the design ensures that the impact of a change is considered holistically and continuously, prioritizing the EA process over the creation of EA artifacts. The inclusion of human behavioral science allows the capture of the diverse and often tacit knowledge held by the different stakeholders affected by a business change, whilst avoiding the dangers of groupthink. CAESAR8 presents its results as an impactful, easy-to-interpret metric designed to be shared with senior business executives to improve intervention decisions.
This thesis demonstrates how CAESAR8 has been developed into a working prototype and presents case studies that describe the model in operation. A diverse group of experts were given access to a working IT prototype for a hands-on evaluation of CAESAR8. An analysis of their findings confirms the model's novel scientific contribution to ISRM.

    Current Knowledge and Considerations Regarding Survey Refusals: Executive Summary of the AAPOR Task Force Report on Survey Refusals

    The landscape of survey research has arguably changed more significantly in the past decade than at any other time in its relatively brief history. In that short time, landline telephone ownership has dropped from some 98 percent of all households to less than 60 percent; cell-phone interviewing went from a novelty to a mainstay; address-based designs quickly became an accepted method of sampling the general population; and surveys via Internet panels became ubiquitous in many sectors of social and market research, even as they continue to raise concerns given their lack of random selection. Among these widespread changes, it is perhaps not surprising that the substantial increase in refusal rates has received comparatively little attention. As we will detail, it was not uncommon for a study conducted 20 years ago to have encountered one refusal for every one or two completed interviews, while today experiencing three or more refusals for every one completed interview is commonplace. This trend has led to several concerns that motivate this Task Force. As refusal rates have increased, refusal bias (as a component of nonresponse bias) poses an increased threat to the validity of survey results. Of practical concern are the efficacy and cost implications of enhanced efforts to avert initial refusals and to convert refusals that do occur. Finally, though no less significant, are the ethical concerns raised by the possibility that efforts to minimize refusals can be perceived as coercing or harassing potential respondents. Indeed, perhaps the most important goal of this document is to foster greater consideration by the reader of the rights of respondents in survey research.
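To make the trend described in this abstract concrete, the following sketch converts its two refusal ratios into the share of final contacts ending in refusal. Note that this "refusal share" is a simplified illustration only, not an official AAPOR refusal rate, which also counts other case dispositions:

```python
# Hypothetical illustration of the refusal trend described above.
# "Refusal share" here is simply refusals / (refusals + completes);
# it is NOT an official AAPOR rate, which counts further dispositions.

def refusal_share(refusals: int, completes: int) -> float:
    """Fraction of (refusals + completes) that ended in a refusal."""
    return refusals / (refusals + completes)

# ~20 years ago: roughly one refusal per two completed interviews
then = refusal_share(refusals=1, completes=2)

# Today: three or more refusals per completed interview
now = refusal_share(refusals=3, completes=1)

print(f"then: {then:.0%}, now: {now:.0%}")  # then: 33%, now: 75%
```

On this simplified measure, the shift is from roughly a third of such outcomes being refusals to three quarters.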

    Pro-oxidant Induced DNA Damage in Human Lymphoblastoid Cells: Homeostatic Mechanisms of Genotoxic Tolerance

    Oxidative stress contributes to many disease etiologies including ageing, neurodegeneration, and cancer, partly through DNA damage induction (genotoxicity). Understanding the interactions of free radicals with DNA is fundamental to discerning mutation risks. In genetic toxicology, regulatory authorities consider that most genotoxins exhibit a linear relationship between dose and mutagenic response. Yet, homeostatic mechanisms, including DNA repair, exist that allow cells to tolerate low levels of genotoxic exposure. Acceptance of thresholds for genotoxicity has widespread consequences in terms of understanding cancer risk and regulating human exposure to chemicals/drugs. Three pro-oxidant chemicals, hydrogen peroxide (H2O2), potassium bromate (KBrO3), and menadione, were examined for low dose-response curves in human lymphoblastoid cells. DNA repair and antioxidant capacity were assessed as possible threshold mechanisms. H2O2 and KBrO3, but not menadione, exhibited thresholded responses, containing a range of nongenotoxic low doses. Levels of the DNA glycosylase 8-oxoguanine glycosylase were unchanged in response to pro-oxidant stress. DNA repair-focused gene expression arrays reported changes in ATM and BRCA1, involved in double-strand break repair, in response to low-dose pro-oxidant exposure; however, these alterations were not substantiated at the protein level. Determination of oxidatively induced DNA damage in H2O2-treated AHH-1 cells showed accumulation of thymine glycol above the genotoxic threshold. Further, the H2O2 dose-response curve was shifted by modulating the antioxidant glutathione. Hence, the observed pro-oxidant thresholds were due to the protective capacities of base excision repair enzymes and antioxidants against DNA damage, highlighting the importance of homeostatic mechanisms in "genotoxic tolerance".

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
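One of the baseline families named in this abstract, TF-IDF, can be sketched in a few lines. The tokenization and the exact IDF variant below are illustrative choices, not the benchmark's own implementation:

```python
# Minimal TF-IDF document similarity sketch (illustrative only; the
# RELISH baselines' exact weighting and tokenization may differ).
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return one sparse {term: weight} vector per document."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for toks in tokenized for term in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

With a seed abstract as the query, ranking all candidate documents by `cosine` against the seed's vector yields a TF-IDF recommendation list of the kind evaluated here; BM25 replaces the raw term-frequency weighting with a saturated, length-normalized variant.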

    Iron Behaving Badly: Inappropriate Iron Chelation as a Major Contributor to the Aetiology of Vascular and Other Progressive Inflammatory and Degenerative Diseases

    The production of peroxide and superoxide is an inevitable consequence of aerobic metabolism, and while these particular "reactive oxygen species" (ROSs) can exhibit a number of biological effects, they are not of themselves excessively reactive and thus they are not especially damaging at physiological concentrations. However, their reactions with poorly liganded iron species can lead to the catalytic production of the very reactive and dangerous hydroxyl radical, which is exceptionally damaging, and a major cause of chronic inflammation. We review the considerable and wide-ranging evidence for the involvement of this combination of (su)peroxide and poorly liganded iron in a large number of physiological and indeed pathological processes and inflammatory disorders, especially those involving the progressive degradation of cellular and organismal performance. These diseases share a great many similarities and thus might be considered to have a common cause (i.e. iron-catalysed free radical and especially hydroxyl radical generation). The studies reviewed include those focused on a series of cardiovascular, metabolic and neurological diseases, where iron can be found at the sites of plaques and lesions, as well as studies showing the significance of iron to aging and longevity. The effective chelation of iron by natural or synthetic ligands is thus of major physiological (and potentially therapeutic) importance. Because these are systems properties, we need to recognise that physiological observables have multiple molecular causes, and studying them in isolation leads to inconsistent patterns of apparent causality when it is the simultaneous combination of multiple factors that is responsible. This explains, for instance, the decidedly mixed effects of antioxidants that have been observed.

    Building a transdisciplinary expert consensus on the cognitive drivers of performance under pressure: An international multi-panel Delphi study

    Introduction: The ability to perform optimally under pressure is critical across many occupations, including the military, first responders, and competitive sport. Despite recognition that such performance depends on a range of cognitive factors, how common these factors are across performance domains remains unclear. The current study sought to integrate existing knowledge in the performance field in the form of a transdisciplinary expert consensus on the cognitive mechanisms that underlie performance under pressure. Methods: International experts were recruited from four performance domains [(i) Defense; (ii) Competitive Sport; (iii) Civilian High-stakes; and (iv) Performance Neuroscience]. Experts rated constructs from the Research Domain Criteria (RDoC) framework (and several expert-suggested constructs) across successive rounds, until all constructs reached consensus for inclusion or were eliminated. Finally, included constructs were ranked for their relative importance. Results: Sixty-eight experts completed the first Delphi round, with 94% of experts retained by the end of the Delphi process. The following 10 constructs reached consensus across all four panels (in order of overall ranking): (1) Attention; (2) Cognitive Control—Performance Monitoring; (3) Arousal and Regulatory Systems—Arousal; (4) Cognitive Control—Goal Selection, Updating, Representation, and Maintenance; (5) Cognitive Control—Response Selection and Inhibition/Suppression; (6) Working memory—Flexible Updating; (7) Working memory—Active Maintenance; (8) Perception and Understanding of Self—Self-knowledge; (9) Working memory—Interference Control; and (10) Expert-suggested—Shifting. Discussion: Our results identify a set of transdisciplinary neuroscience-informed constructs, validated through expert consensus. This expert consensus is critical to standardizing cognitive assessment and informing mechanism-targeted interventions in the broader field of human performance optimization.
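The round-by-round Delphi logic this abstract describes (rate, then include, eliminate, or re-rate) can be sketched as follows. The 80% agreement threshold and binary rating scale are illustrative assumptions, not the study's actual criteria:

```python
# Hypothetical sketch of a Delphi-style consensus check per round.
# The 80% threshold and True/False ("important"/"not important")
# ratings are illustrative assumptions, not the study's criteria.

CONSENSUS = 0.80  # fraction of experts required for consensus

def round_outcome(ratings: list) -> str:
    """Classify one construct after a round of expert ratings."""
    agree = sum(ratings) / len(ratings)
    if agree >= CONSENSUS:
        return "include"            # consensus to keep the construct
    if agree <= 1 - CONSENSUS:
        return "eliminate"          # consensus to drop it
    return "re-rate next round"     # no consensus yet; carry forward
```

Constructs that survive every panel's inclusion threshold would then be ranked for relative importance, as in the study's final step.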